
    Post-Processing Hierarchical Community Structures: Quality Improvements and Multi-scale View

    Dense subgraphs of sparse graphs (communities), which appear in most real-world complex networks, play an important role in many contexts. Most existing community detection algorithms produce a hierarchical structure of communities and seek a partition into communities that optimizes a given quality function. We propose new methods to improve the results of any such algorithm. First, we show how to optimize a general class of additive quality functions (containing the modularity, the performance, and a new similarity-based quality function we propose) over a larger set of partitions than the classical methods. Moreover, we define new multi-scale quality functions which make it possible to detect the different scales at which meaningful community structures appear, whereas classical approaches find only one partition. Comment: 12 pages, 4 figures.
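    Among the quality functions named above, modularity is the most familiar; as a point of reference, the sketch below evaluates the standard modularity of a partition. The toy graph, partition, and helper name are illustrative, not taken from the paper.

```python
# Minimal sketch: evaluate the (additive) modularity of a partition.
# The graph and partition below are illustrative placeholders, not the
# paper's data; the formula Q = sum_c (e_c/m - (d_c/(2m))^2) is standard.
from collections import defaultdict

def modularity(edges, partition):
    """edges: iterable of (u, v); partition: dict vertex -> community id."""
    m = 0                        # total number of edges
    internal = defaultdict(int)  # e_c: edges with both endpoints in community c
    degree = defaultdict(int)    # d_c: sum of degrees of vertices in community c
    for u, v in edges:
        m += 1
        degree[partition[u]] += 1
        degree[partition[v]] += 1
        if partition[u] == partition[v]:
            internal[partition[u]] += 1
    return sum(internal[c] / m - (degree[c] / (2 * m)) ** 2 for c in degree)

edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
partition = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
print(modularity(edges, partition))  # ~0.357 for this two-community toy graph
```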

    Computing communities in large networks using random walks

    Dense subgraphs of sparse graphs (communities), which appear in most real-world complex networks, play an important role in many contexts. Computing them, however, is generally expensive. We propose here a measure of similarity between vertices based on random walks, which has several important advantages: it captures the community structure in a network well, it can be computed efficiently, it works at various scales, and it can be used in an agglomerative algorithm to compute the community structure of a network efficiently. We propose such an algorithm, which runs in time O(mn^2) and space O(n^2) in the worst case, and in time O(n^2 log n) and space O(n^2) in most real-world cases (n and m are respectively the number of vertices and edges in the input graph). Experimental evaluation shows that our algorithm surpasses previously proposed ones in the quality of the obtained community structures and stands among the best in running time. This is very promising because our algorithm can be improved in several ways, which we sketch at the end of the paper. Comment: 15 pages, 4 figures.
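    The random-walk similarity can be made concrete with a small sketch: t-step transition probabilities are compared between vertices and the resulting distances are handed to an off-the-shelf agglomerative clustering. The degree-weighted distance and the choice t = 4 follow the usual Walktrap-style reading of this idea and are assumptions here, as are the toy graph and the SciPy clustering step; the paper's own merging criterion may differ.

```python
# Minimal sketch of a random-walk vertex similarity, assuming the usual
# Walktrap-style distance between t-step transition-probability rows.
# The adjacency matrix and t below are illustrative placeholders.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
d = A.sum(axis=1)                  # vertex degrees
P = A / d[:, None]                 # random-walk transition matrix
Pt = np.linalg.matrix_power(P, 4)  # t-step probabilities, t = 4 here

# Distance between vertices i and j: degree-weighted L2 distance of rows.
n = len(A)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = np.sqrt(np.sum((Pt[i] - Pt[j]) ** 2 / d))

# Agglomerative merging on these distances (Ward here, purely illustrative).
labels = fcluster(linkage(squareform(D), method="ward"), t=2, criterion="maxclust")
print(labels)  # expected: {0, 1, 2} and {3, 4, 5} end up in separate communities
```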

    Numerical and Computational Strategy for Pressure-Driven Steady-State Simulation of Oilfield Production

    Within the TINA (Transient Integrated Network Analysis) research project and in partnership with Total, IFP is developing a new generation of simulation tools for flow assurance studies. This integrated simulation software will be able to perform multiphase simulations from the wellbore to the surface facilities. The purpose of this paper is to define, in a CAPE-OPEN-compliant environment, a numerical and computational strategy for solving pressure-driven steady-state simulation problems, i.e. pure simulation and design problems, in the specific context of hydrocarbon production and transport from the wellbore to the surface facilities.
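    As a loose illustration of what a pressure-driven steady-state problem looks like, the sketch below balances flow at a single manifold node fed by two wells, using a crude single-phase pressure-drop relation and a root finder; every correlation, name, and number is an assumption made for illustration and is unrelated to the TINA models.

```python
# Purely illustrative pressure-driven steady-state balance: two wells feed a
# manifold that discharges through a flowline; the manifold pressure is solved
# so that mass is conserved. The quadratic single-phase pressure-drop relation
# and all numbers are placeholders; real flow-assurance models are multiphase.
from scipy.optimize import brentq

def q(p_up, p_down, k):
    """Flow through a branch with dp = k * q**2 (single-phase placeholder)."""
    dp = p_up - p_down
    return (abs(dp) / k) ** 0.5 * (1 if dp >= 0 else -1)

P_WELL_1, P_WELL_2, P_SEP = 250.0, 230.0, 50.0  # bar, illustrative
K1, K2, K_LINE = 0.8, 1.2, 0.1                  # bar per (unit rate)^2

def residual(p_manifold):
    # Mass balance at the manifold: inflow from both wells = outflow to separator.
    return (q(P_WELL_1, p_manifold, K1) + q(P_WELL_2, p_manifold, K2)
            - q(p_manifold, P_SEP, K_LINE))

p_m = brentq(residual, P_SEP + 1e-6, P_WELL_1 - 1e-6)
print(f"manifold pressure ~ {p_m:.1f} bar, rate ~ {q(p_m, P_SEP, K_LINE):.1f}")
```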

    Relation Between Conductivity and Ion Content in Urban Wastewater

    Wastewater conductivity has been monitored over extended periods of time by in situ probes at the inlets of treatment plants and on automatically collected grab samples in four communities (from 1,000 to 350,000 PE). In parallel, the concentrations of the main ionic contributors, such as calcium, sodium, potassium, magnesium, ammonium, ortho-phosphate, chloride and sulphate, have been measured and their variations over time compared to human activity patterns. It appears that sodium, potassium, ammonium and ortho-phosphate, which together contribute about 34% of wastewater conductivity, exhibit diurnal variations in phase with human activity as evaluated by absorbance at 254 nm (soluble organic pollution). However, calcium (≈ 22% of wastewater conductivity) is out of phase; its release, ahead of that of the other cations and anions, could be related to sewer concrete corrosion or to groundwater infiltration. The combination of these different ionic contributions creates a conductivity pattern which cannot easily be related to human activity, making it difficult to integrate conductivity into a monitoring system able to detect ion-related abnormalities in wastewater quality.
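    As a rough illustration of how ion concentrations translate into conductivity contributions, the sketch below multiplies each concentration by an approximate limiting molar conductivity. The concentration values are placeholders and the λ values are rounded textbook figures at 25 °C, not data from the study; ionic-strength corrections and phosphate species are ignored.

```python
# Rough per-ion conductivity contributions from concentrations, using
# approximate limiting molar conductivities (S·cm²/mol at 25 °C, rounded
# textbook values). Concentrations are illustrative placeholders only.
LAMBDA = {"Na+": 50, "K+": 74, "NH4+": 74, "Ca2+": 119,
          "Mg2+": 106, "Cl-": 76, "SO4_2-": 160}

conc_mmol_per_L = {"Na+": 3.0, "K+": 0.6, "NH4+": 2.5, "Ca2+": 2.0,
                   "Mg2+": 0.5, "Cl-": 3.5, "SO4_2-": 0.8}

# Unit bookkeeping: kappa_i [µS/cm] = lambda_i [S·cm²/mol] * c_i [mmol/L].
contrib_uS_cm = {ion: LAMBDA[ion] * conc_mmol_per_L[ion] for ion in LAMBDA}
total = sum(contrib_uS_cm.values())
for ion, k in sorted(contrib_uS_cm.items(), key=lambda kv: -kv[1]):
    print(f"{ion:7s} {k:6.0f} µS/cm  ({100 * k / total:4.1f} % of ionic total)")
```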

    XNect: Real-time Multi-Person 3D Motion Capture with a Single RGB Camera

    We present a real-time approach for multi-person 3D motion capture at over 30 fps using a single RGB camera. It operates successfully in generic scenes which may contain occlusions by objects and by other people. Our method operates in subsequent stages. The first stage is a convolutional neural network (CNN) that estimates 2D and 3D pose features along with identity assignments for all visible joints of all individuals. We contribute a new architecture for this CNN, called SelecSLS Net, that uses novel selective long and short range skip connections to improve the information flow, allowing for a drastically faster network without compromising accuracy. In the second stage, a fully connected neural network turns the possibly partial (on account of occlusion) 2D pose and 3D pose features for each subject into a complete 3D pose estimate per individual. The third stage applies space-time skeletal model fitting to the predicted 2D and 3D pose per subject to further reconcile the 2D and 3D pose and enforce temporal coherence. Our method returns the full skeletal pose in joint angles for each subject. This is a further key distinction from previous work, which does not produce joint-angle results for a coherent skeleton in real time for multi-person scenes. The proposed system runs on consumer hardware at a previously unseen speed of more than 30 fps given 512x320 images as input while achieving state-of-the-art accuracy, which we demonstrate on a range of challenging real-world scenes. Comment: To appear in ACM Transactions on Graphics (SIGGRAPH) 2020.
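    The second stage (a fully connected network that completes possibly partial per-person 2D/3D pose features into a full 3D pose) lends itself to a compact sketch. The joint count, layer widths, and masking scheme below are assumptions made for illustration, not the paper's exact architecture.

```python
# Illustrative sketch of a stage-2-style lifting network: partial 2D/3D pose
# features (with a visibility mask for occluded joints) pass through a small
# fully connected network that outputs a complete 3D pose. Layer widths,
# joint count, and the masking scheme are assumptions, not the paper's design.
import torch
import torch.nn as nn

NUM_JOINTS = 21  # placeholder joint count

class PoseLifter(nn.Module):
    def __init__(self, hidden=1024):
        super().__init__()
        # Inputs per person: 2D joints (x, y), 3D features (x, y, z), visibility mask.
        in_dim = NUM_JOINTS * (2 + 3 + 1)
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, NUM_JOINTS * 3),  # complete 3D pose
        )

    def forward(self, pose2d, feat3d, visible):
        # Zero out occluded joints so the network learns to fill them in.
        m = visible.unsqueeze(-1)
        x = torch.cat([(pose2d * m).flatten(1),
                       (feat3d * m).flatten(1),
                       visible], dim=1)
        return self.net(x).view(-1, NUM_JOINTS, 3)

# Toy forward pass with a batch of two people.
lifter = PoseLifter()
out = lifter(torch.randn(2, NUM_JOINTS, 2), torch.randn(2, NUM_JOINTS, 3),
             torch.ones(2, NUM_JOINTS))
print(out.shape)  # torch.Size([2, 21, 3])
```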

    The Strength of Weak Cooperation: A Case Study on Flickr

    Web 2.0 works on the principle of weak cooperation, where a huge number of individual contributions build solid and structured sources of data. In this paper, we detail the main properties of this weak cooperation by illustrating them on the photo publication website Flickr, showing the variety of uses producing rich content and the various procedures devised by Flickr users themselves to select quality content. We underline the interaction between small and heavy users as a specific form of collective production in large social network communities. We also give the main statistics on the dataset (5M users, 150M photos) we worked on for this study, collected from the Flickr website using the public API.

    Binary partition trees-based robust adaptive hyperspectral RX anomaly detection

    The Reed-Xiaoli (RX) detector is considered the benchmark algorithm in multidimensional anomaly detection (AD). However, the RX detector's performance decreases when the estimation of the statistical parameters is poor. This can happen when the background is non-homogeneous or the noise independence assumption is not fulfilled. For better performance, the statistical parameters are estimated locally using a sliding-window approach. In this approach, called adaptive RX, a window is centered over the pixel under test (PUT), and the background mean and covariance statistics are estimated using the data samples lying inside the window's spatial support, named the secondary data. Sometimes a smaller guard window prevents pixels close to the PUT from being used, in order to avoid the presence of outliers in the statistical estimation. The size of the window is chosen large enough to ensure the invertibility of the covariance matrix and small enough to justify both spatial and spectral homogeneity. We present here an alternative methodology to select the secondary data for a PUT by means of a binary partition tree (BPT) representation of the image. We test the proposed BPT-based adaptive hyperspectral RX AD algorithm using a real dataset provided by the Target Detection Blind Test project.
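    For context, a minimal numpy sketch of the classical sliding-window (adaptive) RX detector that the abstract takes as its baseline: window sizes, the diagonal regularization, and the synthetic cube are illustrative choices, and the paper's BPT-based secondary-data selection is not reproduced here.

```python
# Sliding-window adaptive RX baseline: for each pixel under test (PUT), the
# background mean and covariance are estimated from an outer window minus a
# guard window, and the Mahalanobis distance (x - mu)^T Sigma^-1 (x - mu)
# is the anomaly score. Window sizes, regularization, and data are illustrative.
import numpy as np

def adaptive_rx(cube, outer=7, guard=3, eps=1e-3):
    rows, cols, bands = cube.shape
    ro, rg = outer // 2, guard // 2
    scores = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            i0, i1 = max(0, i - ro), min(rows, i + ro + 1)
            j0, j1 = max(0, j - ro), min(cols, j + ro + 1)
            window = cube[i0:i1, j0:j1].reshape(-1, bands)
            # Guard window: drop secondary data too close to the PUT.
            ii, jj = np.mgrid[i0:i1, j0:j1]
            keep = (np.abs(ii - i) > rg) | (np.abs(jj - j) > rg)
            secondary = window[keep.ravel()]
            mu = secondary.mean(axis=0)
            cov = np.cov(secondary, rowvar=False) + eps * np.eye(bands)
            d = cube[i, j] - mu
            scores[i, j] = d @ np.linalg.solve(cov, d)
    return scores

cube = np.random.randn(32, 32, 10)   # synthetic "hyperspectral" cube
cube[16, 16] += 5.0                  # implant a crude anomaly
print(np.unravel_index(adaptive_rx(cube).argmax(), (32, 32)))  # likely (16, 16)
```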